How You Can Tell the AI Images of Trump's Arrest Are Deepfakes
The viral AI-generated images of Donald Trump's arrest you may be seeing on social media are definitely fake. But some of these photorealistic creations are pretty convincing. Others look more like stills from a video game or a lucid dream. A Twitter thread by Eliot Higgins, a founder of Bellingcat, that shows Trump getting swarmed by synthetic cops, running around on the lam, and picking out a prison jumpsuit was viewed over 3 million times on the social media platform. What does Higgins think viewers can do to distinguish fake AI images, like the ones in his post, from real photographs that may come out of the former president's potential arrest?
333+ Twitter Thread ChatGPT Prompts
Are you tired of staring at a blank screen, trying to create the perfect Twitter thread to engage your followers? Do you want to take your Twitter game to the next level? Well, we've got just the thing for you! Introducing the 333 Twitter Thread ChatGPT Prompts resource - the ultimate toolkit to help you create engaging, share-worthy Twitter threads that will leave your followers begging for more. With our customizable prompts, you can effortlessly create eye-catching threads that grab your followers' attention.
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.69)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.69)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
Rumour detection using graph neural network and oversampling in benchmark Twitter dataset
Patel, Shaswat, Bansal, Prince, Kaur, Preeti
Recently, online social media has become a primary source for new information and for misinformation or rumours. In the absence of an automatic rumour detection system, the propagation of rumours has increased manifold, leading to serious societal damage. In this work, we propose a novel method for building an automatic rumour detection system, focusing on oversampling to alleviate the fundamental challenge of class imbalance in the rumour detection task. Our oversampling method relies on contextualised data augmentation to generate synthetic samples for underrepresented classes in the dataset. The key idea is to select tweets in a thread for augmentation via a non-random selection criterion, focusing the augmentation process on relevant tweets. Furthermore, we propose two graph neural networks (GNNs) to model non-linear conversations in a thread. To enhance the tweet representations in our method, we employ a custom feature selection technique based on the state-of-the-art BERTweet model. Experiments on three publicly available datasets confirm that 1) our GNN models outperform the current state-of-the-art classifiers by more than 20% (F1-score); 2) our oversampling technique increases model performance by more than 9% (F1-score); 3) focusing on relevant tweets for data augmentation via a non-random selection criterion can further improve the results; and 4) our method has superior capabilities to detect rumours at a very early stage.
- North America > United States > New York > New York County > New York City (0.04)
- Asia > Philippines > Luzon > Central Luzon > Province of Bataan (0.04)
- Asia > Singapore > Central Region > Singapore (0.04)
- (8 more...)
- Media > News (0.88)
- Health & Medicine (0.69)
- Information Technology > Services (0.68)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
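The abstract above describes oversampling an imbalanced rumour dataset by augmenting only the most relevant tweets in a thread. A minimal sketch of that idea follows; the relevance score (word overlap with the source tweet) and the augmentation (random word dropout) are stand-ins of my own, since the paper's actual contextualised augmentation and selection criterion are not spelled out in this snippet.

```python
import random

def relevance_score(tweet: str, source: str) -> float:
    # Toy relevance proxy: Jaccard word overlap with the source (root) tweet.
    # The paper's actual non-random selection criterion is a stand-in here.
    a, b = set(tweet.lower().split()), set(source.lower().split())
    return len(a & b) / max(len(a | b), 1)

def augment(tweet: str, rng: random.Random, drop_prob: float = 0.15) -> str:
    # Toy augmentation via random word dropout; the paper instead uses
    # contextualised augmentation (e.g. a masked language model).
    words = tweet.split()
    kept = [w for w in words if rng.random() > drop_prob]
    return " ".join(kept) if kept else tweet

def oversample_thread(source: str, replies: list[str],
                      n_new: int, seed: int = 0) -> list[str]:
    # Non-random selection: rank replies by relevance to the source tweet
    # and generate synthetic samples only from the top half.
    rng = random.Random(seed)
    ranked = sorted(replies, key=lambda t: relevance_score(t, source),
                    reverse=True)
    top = ranked[: max(1, len(ranked) // 2)]
    return [augment(rng.choice(top), rng) for _ in range(n_new)]
```

The point of the non-random selection is that off-topic replies ("nice weather today") never enter the augmentation pool, so the synthetic minority-class samples stay on-topic.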
What's Truly Disturbing About Those Sci-Fi Avatars That Are Suddenly Everywhere
Over the past week, many of our Instagram and Twitter feeds have been filled with avatars of our friends looking like heroes in a cyberpunk thriller. With their Day-Glo pink hair, shimmery skin, and Mad Max–meets–Ren Faire corsets, many of these profile pics looked so good that it was tempting to download the Lensa app and splurge on 100 A.I.-generated portraits of one's own. But by the time TMZ built its slideshow of the best celebrity Lensa looks, the tide had turned. Yes, the app, which continues to top Apple's free apps chart (users are prompted within Lensa to pay for the avatars), was succeeding in making hot people hotter and helping the selfie-averse find their superhero inside them. But there was also the matter of theft.
- North America > United States > California (0.05)
- North America > United States > Arizona (0.05)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.05)
Lensa, the AI portrait app, has soared in popularity. But many artists question the ethics of AI art.
For many online, Lensa AI is a cheap, accessible profile picture generator. But in digital art circles, the popularity of artificial intelligence-generated art has raised major privacy and ethics concerns. Lensa, which launched as a photo editing app in 2018, went viral last month after releasing its "magic avatars" feature. It uses a minimum of 10 user-uploaded images and the neural network Stable Diffusion to generate portraits in a variety of digital art styles. Social media has been flooded with Lensa AI portraits, from photorealistic paintings to more abstract illustrations.
When Life Insurance Gives You AI, Should You Make Lemonade?
Advertisements about those methods often mention how customers can sign up for policies faster, file claims more efficiently, and get 24/7 assistance, all thanks to AI. However, a recent Twitter thread from Lemonade -- an insurance brand that uses AI -- sheds light on this practice's potential issues. People saw it, then decided the Lemonade AI approach highlights how technology may hurt and help, depending on its application. Many companies don't divulge details about how they use AI. The idea is that keeping the AI shrouded in mystery gives the impression of a futuristic offering while protecting a company's proprietary technology.
- Information Technology > Security & Privacy (1.00)
- Banking & Finance > Insurance (1.00)
- Government (0.99)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.96)
A disturbing, viral Twitter thread reveals how AI-powered insurance can go wrong
Lemonade, the fast-growing, machine learning-powered insurance app, put out a real lemon of a Twitter thread on Monday with a proud declaration that its AI analyzes videos of customers when determining if their claims are fraudulent. The company has been trying to explain itself and its business model -- and fend off serious accusations of bias, discrimination, and general creepiness -- ever since. The prospect of being judged by AI for something as important as an insurance claim was alarming to many who saw the thread, and it should be. We've seen how AI can discriminate against certain races, genders, economic classes, and disabilities, among other categories, leading to those people being denied housing, jobs, education, or justice. Now we have an insurance company that prides itself on largely replacing human brokers and actuaries with bots and AI, collecting data about customers without them realizing they were giving it away, and using those data points to assess their risk.
This is how Stadia lost one of its most anticipated indie games
Here's how it went down in public: Terraria co-creator and Re-Logic CEO Andrew Spinks published a Twitter thread early Monday morning accusing Google of suddenly, unjustifiably suspending his studio's YouTube, Gmail, Drive and Play accounts. He said he had been locked out of 15 years of resources for nearly a month, even though he had never violated Google's rules, and the company was refusing to clarify the situation. So, Spinks canceled the Google Stadia edition of Terraria, a beloved 2011 indie game with an audience of more than 30 million players. Re-Logic hadn't even announced the Stadia version of Terraria yet, though rumors had recently hit the message boards and fans were getting excited about a new way to play. Spinks tweeted, "I absolutely have not done anything to violate your terms of service, so I can take this no other way than you deciding to burn this bridge." Now, here's how it went down behind the scenes, as described to Engadget by a Re-Logic spokesperson: In mid-January, the parent account for Re-Logic's Google services, Demilogic, received a notice from YouTube saying it was in violation of the site's policies. "This was quite a bit confusing to us," the spokesperson said. Developers hadn't uploaded anything to the Re-Logic YouTube channel in three months, and no one in their community had alerted them to new or offensive content on the account. Google didn't suspend the Re-Logic YouTube channel right away. Instead, its initial email read, "We know that you may not have realized this was a violation of our policies, so we are not applying a strike to your channel."
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Games > Computer Games (0.72)
How do you control an AI as powerful as OpenAI's GPT-3?
The world has a new AI toy, and it's called GPT-3. The latest iteration of OpenAI's text generating model has left many starstruck by its abilities - although the hype may be overblown. GPT-3 is a machine learning system that has been fed 45TB of text data, an unprecedented amount. All that training allows it to generate all sorts of written content: stories, code, legal jargon, all based on just a few input words or sentences. And the beta test has already produced some jaw-dropping results.
- Law (0.67)
- Information Technology (0.48)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.79)
r/MachineLearning - [D] Towards modular and programmable architecture search
To appear at NeurIPS 2019. Modular and programmable architecture search framework that allows you to implement your own search spaces and search algorithms through a consistent API. Reading the Twitter thread will give you a pretty good idea of the main ideas. Looking to get a few initial users and feedback.
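The post describes implementing search spaces and search algorithms through a consistent API. A minimal sketch of that separation follows; the names and the dict-of-choices representation are illustrative assumptions, not the framework's actual interface, and random search stands in for any pluggable search algorithm.

```python
import random

# Hypothetical search space: each entry names a hyperparameter and lists
# its discrete options. Any search algorithm can consume this structure.
SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "hidden_units": [64, 128, 256],
    "activation": ["relu", "tanh"],
}

def sample_config(space: dict, rng: random.Random) -> dict:
    # One concrete architecture: pick an option for every hyperparameter.
    return {name: rng.choice(options) for name, options in space.items()}

def random_search(space: dict, score_fn, n_trials: int = 20, seed: int = 0):
    # The simplest search algorithm: sample configurations independently
    # and keep the best-scoring one. Swappable for smarter strategies
    # (evolutionary, Bayesian, etc.) behind the same interface.
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_config(space, rng)
        s = score_fn(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score
```

Because the algorithm only touches configurations through `sample_config` and `score_fn`, a new search space or a new search strategy can be dropped in without changing the other side - which is the modularity the post is advertising.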